Sorted results: 4,922 records found in total (search time: 275 ms)
61.
Motivated by investigating the relationship between progesterone and the days in a menstrual cycle in a longitudinal study, we propose a multikink quantile regression model for longitudinal data analysis. It relaxes the linearity condition and assumes different regression forms in different regions of the domain of the threshold covariate. We first propose a multikink quantile regression for longitudinal data and develop two procedures to estimate the regression coefficients and the kink point locations: a computationally efficient profile estimator under the working-independence framework, and an estimator that accounts for within-subject correlations through the unbiased generalized estimating equation approach. We establish the selection consistency of the number of kink points and the asymptotic normality of both proposed estimators. Second, we construct a rank score test based on partial subgradients for the existence of a kink effect in longitudinal studies, and derive both the null distribution and the local alternative distribution of the test statistic. Simulation studies show that the proposed methods have excellent finite-sample performance. In the application to the longitudinal progesterone data, we identify two kink points in the progesterone curves over different quantiles and observe that the progesterone level remains stable before the day of ovulation, rises rapidly for five to six days after ovulation, and then either stabilizes again or drops slightly.
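The profile estimation idea can be illustrated with a minimal single-kink, cross-sectional sketch: for each candidate kink location, fit a quantile regression by linear programming and keep the location with the smallest check loss. This is a simplification (the paper handles multiple kinks and within-subject correlation), and all function names here are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def quantile_fit(X, y, tau):
    """Quantile regression via linear programming: minimize the check loss
    sum(tau*u + (1-tau)*v) subject to y = X @ beta + u - v, u, v >= 0,
    with beta unrestricted. Returns (beta, minimized check loss)."""
    n, p = X.shape
    c = np.concatenate([np.zeros(p), tau * np.ones(n), (1 - tau) * np.ones(n)])
    A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
    bounds = [(None, None)] * p + [(0, None)] * (2 * n)
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    return res.x[:p], res.fun

def kink_quantile_regression(x, y, tau=0.5, grid=None):
    """Profile over candidate kink locations; return (kink, coefficients, loss)."""
    if grid is None:
        grid = np.quantile(x, np.linspace(0.1, 0.9, 17))
    best = None
    for k in grid:
        # Design matrix with a hinge term (x - k)_+ encoding the kink.
        X = np.column_stack([np.ones_like(x), x, np.maximum(x - k, 0.0)])
        beta, loss = quantile_fit(X, y, tau)
        if best is None or loss < best[2]:
            best = (k, beta, loss)
    return best
```

For a noise-free piecewise-linear signal, the profiled check loss is minimized exactly at the true kink, which makes the grid-search logic easy to verify.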
62.
Leveraging information in aggregate data from external sources to improve estimation efficiency and prediction accuracy in smaller-scale studies has drawn a great deal of attention in recent years. Yet conventional methods often either ignore uncertainty in the external information or fail to account for heterogeneity between the internal and external studies. This article proposes an empirical-likelihood-based framework that improves estimation of semiparametric transformation models by incorporating information about the t-year subgroup survival probability from external sources. The proposed estimation procedure incorporates an additional likelihood component to account for uncertainty in the external information and employs a density ratio model to characterize population heterogeneity. We establish the consistency and asymptotic normality of the proposed estimator and show that it is more efficient than the conventional pseudo-partial likelihood estimator that does not combine information. Simulation studies show that the proposed estimator has little bias and outperforms the conventional approach even in the presence of information uncertainty and heterogeneity. The proposed methodology is illustrated with an analysis of a pancreatic cancer study.
63.
Kaitlyn Cook, Wenbin Lu, Rui Wang. Biometrics, 2023, 79(3): 1670-1685
The Botswana Combination Prevention Project was a cluster-randomized HIV prevention trial whose follow-up period coincided with Botswana's national adoption of a universal test-and-treat strategy for HIV management. Of interest is whether, and to what extent, this change in policy modified the preventative effects of the study intervention. To address such questions, we adopt a stratified proportional hazards model for clustered interval-censored data with time-dependent covariates and develop a composite expectation-maximization algorithm that facilitates estimation of model parameters without placing parametric assumptions on either the baseline hazard functions or the within-cluster dependence structure. We show that the resulting estimators for the regression parameters are consistent and asymptotically normal. We also propose, and provide theoretical justification for, the use of the profile composite likelihood function to construct a robust sandwich estimator for the variance. We characterize the finite-sample performance and robustness of these estimators through extensive simulation studies. Finally, we apply this stratified proportional hazards model to a re-analysis of the Botswana Combination Prevention Project, with the national adoption of a universal test-and-treat strategy now modeled as a time-dependent covariate.
64.
Use of historical data and real-world evidence holds great potential to improve the efficiency of clinical trials. One major challenge is to borrow information from historical data effectively while maintaining a reasonable type I error and minimal bias. We propose the elastic prior approach to address this challenge. Unlike existing approaches, it proactively controls the behavior of information borrowing and type I errors by incorporating the well-known concept of a clinically significant difference through an elastic function, defined as a monotonic function of a congruence measure between historical data and trial data. The elastic function is constructed to satisfy a set of prespecified criteria such that the resulting prior strongly borrows information when historical and trial data are congruent, but refrains from borrowing when they are incongruent. The elastic prior approach has the desirable property of being information-borrowing consistent; that is, it asymptotically controls the type I error at the nominal value regardless of whether the historical data are congruent with the trial data. Our simulation study evaluating finite-sample characteristics confirms that, compared to existing methods, the elastic prior has better type I error control and yields competitive or higher power. The proposed approach is applicable to binary, continuous, and survival endpoints.
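A minimal sketch of the borrowing mechanism for a binary endpoint, assuming an exponential elastic function of a rate-difference congruence measure. The paper's exact elastic function and congruence measure differ; `delta` (the clinically significant difference) and the tuning constant `a` are illustrative assumptions.

```python
import numpy as np

def elastic_weight(x_h, n_h, x_c, n_c, delta=0.1, a=5.0):
    """Illustrative elastic function: map a discrepancy between historical
    and current response rates, standardized by the clinically significant
    difference delta, to a borrowing weight in [0, 1]."""
    p_h, p_c = x_h / n_h, x_c / n_c
    congruence = abs(p_h - p_c) / delta  # 0 when rates agree
    return float(np.exp(-a * congruence ** 2))

def elastic_beta_posterior(x_h, n_h, x_c, n_c, delta=0.1, a=5.0):
    """Posterior Beta(alpha, beta) for the trial response rate, with the
    historical successes/failures down-weighted by the elastic weight."""
    w = elastic_weight(x_h, n_h, x_c, n_c, delta, a)
    alpha = 1.0 + w * x_h + x_c
    beta = 1.0 + w * (n_h - x_h) + (n_c - x_c)
    return alpha, beta, w
```

When the historical and trial rates agree, the weight is near 1 and the historical data contribute fully; when they disagree by several multiples of `delta`, the weight collapses toward 0 and the posterior reverts to the trial data alone.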
65.
We study bias-reduced estimators of exponentially transformed parameters in generalized linear models (GLMs) and show how they can be used to obtain bias-reduced conditional (or unconditional) odds ratios in matched case-control studies. Two options are considered and compared: the explicit approach and the implicit approach. The implicit approach is based on the modified score function, where bias-reduced estimates are obtained by using iterative procedures to solve the modified score equations; the explicit approach is shown to be a one-step approximation of this iterative procedure. To apply these approaches to the conditional analysis of matched case-control studies, with potentially unmatched confounding and with several exposures, we exploit the relation between the conditional likelihood and both the likelihood of the unconditional logit binomial GLM for matched pairs and the Cox partial likelihood for matched sets with appropriately structured data. The properties of the estimators are evaluated in a large Monte Carlo simulation study, and an illustration with a real dataset is presented. Researchers reporting results on the exponentiated scale should use bias-reduced estimators, since otherwise the effects can be under- or overestimated, and the magnitude of the bias is especially large in studies with smaller sample sizes.
66.
Analysts often estimate treatment effects in observational studies using propensity score matching techniques. When there are missing covariate values, analysts can multiply impute the missing data to create m completed data sets, estimate propensity scores on each completed data set, and use these to estimate treatment effects. However, relatively little attention has been paid to developing imputation models for the additional problem of missing treatment indicators, perhaps because of the risk of generating implausible imputations. Yet simply ignoring the missing treatment values, akin to a complete-case analysis, can also lead to problems when estimating treatment effects. We propose a latent class model to multiply impute missing treatment indicators. We illustrate its performance through simulations and with data from a study on the determinants of children's cognitive development. This approach obtains treatment effect estimates closer to the true treatment effect than conventional imputation procedures or a complete-case analysis.
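A toy sketch of the matching-after-imputation pipeline described above: estimate a propensity model on each completed data set, match treated units to their nearest controls, and average the ATT across the m data sets. This uses plain logistic regression and point-estimate pooling only; the paper's latent class imputation model for missing treatment indicators is not reproduced here, and all function names are ours.

```python
import numpy as np

def logistic_fit(X, t, iters=25):
    """Propensity model: logistic regression fitted by Newton-Raphson."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1.0 - p)
        H = X.T @ (X * W[:, None]) + 1e-8 * np.eye(X.shape[1])
        beta += np.linalg.solve(H, X.T @ (t - p))
    return beta

def att_matched(X, t, y):
    """1:1 nearest-neighbor matching on the propensity score, with
    replacement; returns the matched ATT estimate."""
    ps = 1.0 / (1.0 + np.exp(-X @ logistic_fit(X, t)))
    treated = np.where(t == 1)[0]
    controls = np.where(t == 0)[0]
    # For each treated unit, index of the control with the closest score.
    matches = controls[np.argmin(
        np.abs(ps[treated][:, None] - ps[controls][None, :]), axis=1)]
    return float(np.mean(y[treated] - y[matches]))

def pooled_att(imputed_datasets):
    """Pool across the m completed data sets (point estimate only;
    Rubin's rules would also combine the variances)."""
    return float(np.mean([att_matched(X, t, y) for X, t, y in imputed_datasets]))
```

On confounded simulated data with a known effect, the pooled matched estimate recovers the treatment effect because the propensity model adjusts for the confounder.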
67.
The gold standard for investigating the efficacy of a new therapy is a (pragmatic) randomized controlled trial (RCT). This approach is costly, time-consuming, and not always practicable. At the same time, large quantities of patient-level control-condition data from (former) RCTs or real-world data (RWD), already available in analyzable form, are neglected. Alternative study designs are therefore desirable. The design presented here consists of setting up a prediction model that determines treatment effects under the control condition for future patients. When a new treatment is to be tested against a control treatment, a single-arm trial of the new therapy is conducted. The treatment effect is then evaluated by comparing the outcomes of the single-arm trial against the predicted outcomes under the control condition. While this design has obvious advantages over classical RCTs (increased efficiency, lower cost, alleviating participants' fear of being assigned to the control treatment), it has several sources of bias. Our aim is to investigate whether and how such a design, the prediction design, may be used to provide information on treatment effects by leveraging external data sources. For this purpose, we investigate under what assumptions linear prediction models can predict the counterfactual outcomes of patients precisely enough to construct a test, and an appropriate sample size formula, for evaluating the average treatment effect in the population of a new study. A user-friendly R Shiny application (available at https://web.imbi.uni-heidelberg.de/PredictionDesignR/) facilitates the application of the proposed methods, and a real-world example illustrates them.
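The core comparison can be sketched as follows, assuming a linear prediction model and a naive one-sample t-test on observed-minus-predicted outcomes. The paper's test and sample size formula additionally account for prediction uncertainty, which this sketch ignores.

```python
import numpy as np
from scipy import stats

def prediction_design_test(X_ext, y_ext, X_new, y_new):
    """Fit a linear prediction model on external control data, predict the
    counterfactual control outcome for each single-arm patient, and test
    whether observed outcomes differ from the predictions.

    Returns (estimated average treatment effect, t statistic, p-value)."""
    # Prediction model trained on external control-condition data.
    beta, *_ = np.linalg.lstsq(X_ext, y_ext, rcond=None)
    # Observed single-arm outcome minus predicted counterfactual.
    residuals = y_new - X_new @ beta
    t_stat, p_value = stats.ttest_1samp(residuals, popmean=0.0)
    return float(np.mean(residuals)), float(t_stat), float(p_value)
```

With a well-specified prediction model, the mean residual estimates the average treatment effect in the single-arm population; misspecification of the control-outcome model is one of the bias sources the abstract mentions.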
68.
The turnover measurement of proteins and proteoforms has been greatly facilitated by workflows coupling metabolic labeling with mass spectrometry (MS), including dynamic stable isotope labeling by amino acids in cell culture (dynamic SILAC) or pulsed SILAC (pSILAC). Recent studies, including ours, have integrated the measurement of post-translational modifications (PTMs) at the proteome level (i.e., phosphoproteomics) with pSILAC experiments in steady-state systems, exploring the link between PTMs and turnover at the proteome scale. An open question in the field is how exactly to interpret these complex datasets from a biological perspective. Here, we present a novel pSILAC phosphoproteomic dataset obtained during a dynamic process of cell starvation using data-independent acquisition MS (DIA-MS). To provide an unbiased, hypothesis-free analysis framework, we developed a strategy to interrogate how phosphorylation dynamically impacts protein turnover across the time-series data. With this strategy, we discovered a complex relationship between phosphorylation and protein turnover that was previously underexplored. Our results further reveal a link between phosphorylation stoichiometry and the turnover of phosphorylated peptidoforms. Moreover, they suggest that phosphoproteomic turnover diversity cannot directly explain the abundance regulation of phosphorylation during cell starvation, underscoring the importance of future studies addressing PTM-site-resolved protein turnover.
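For orientation, turnover rates in pSILAC data are commonly derived from a first-order labeling model, L(t) = 1 - exp(-k*t), at steady state. A minimal least-squares sketch follows; the study's starvation setting is dynamic, so this steady-state model is only the baseline assumption, not the paper's analysis strategy.

```python
import numpy as np

def turnover_rate(times, labeled_fraction):
    """Estimate a first-order turnover rate k from pulse-labeling data,
    assuming steady state: L(t) = 1 - exp(-k*t). Linearize as
    -log(1 - L) = k*t and fit k by least squares through the origin.
    Returns (k, half-life = ln(2)/k)."""
    t = np.asarray(times, dtype=float)
    z = -np.log(1.0 - np.asarray(labeled_fraction, dtype=float))
    k = float(np.sum(t * z) / np.sum(t * t))
    return k, float(np.log(2.0) / k)
```

Applied per peptidoform, such rates are what analyses like the one above compare across phosphorylation states and stoichiometries.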
69.
Research data management (RDM) requires standards, policies, and guidelines. Findable, accessible, interoperable, and reusable (FAIR) data management is critical for sustainable research, so collaborative approaches to managing FAIR-structured data are becoming increasingly important for long-term, sustainable RDM. However, they are applied rather hesitantly in bioengineering. One reason may be the interdisciplinary character of the research field: bioengineering, as the application of principles of biology and tools of process engineering, often has to meet different criteria. RDM is further complicated by the fact that researchers from different scientific institutions must each meet the criteria of their home institution, which can lead to additional conflicts. Centrally provided general repositories implementing a collaborative approach that enables data storage from the outset are therefore needed. In a biotechnology research network with over 20 tandem projects, it was demonstrated how FAIR RDM can be implemented through such a collaborative approach and the use of a common data structure. The research network also highlighted the importance of structure within a repository for keeping research data available throughout the entire data lifecycle.
70.
DNA microarray technology permits the study of biological systems and processes on a genome-wide scale. Arrays based on cDNA clones, oligonucleotides and genomic clones have been developed for investigations of gene expression, genetic analysis and genomic changes associated with disease. Over the past 3-4 years, microarrays have become more widely available to the research community. This has occurred through increased commercial availability of custom and generic arrays and the development of robotic equipment that has enabled array printing and analysis facilities to be established in academic research institutions. This brief review examines the public and commercial resources, the microarray fabrication and data capture and analysis equipment currently available to the user.